Search Results: "Patrick Schoenfeld"

23 October 2009

Patrick Schoenfeld: What an "Intel atom inside" sticker could make of you

So I've got this cool Intel Atom board, the Intel Essential Series D945GSEJT, which is the first Atom ITX board that doesn't feature a 25W chipset for a 2W CPU. It's pretty cool, but one thing made me laugh.
Today I noticed that an "Intel atom inside" sticker came with the board. I thought: well, nice, now I could - if I wanted - put this on my Mini-ITX system. But then I saw what's written next to it: "Use of the enclosed Intel Atom logo label is unauthorized and constitutes infringement of Intel's exclusive trademark rights unless you have signed the Intel Atom logo trademark license".

Isn't it nice of Intel to supply me with a sticker that would make me a criminal if I used it?

20 October 2009

Patrick Schoenfeld: Cool PHP-Code.

Did you know that in PHP you can write something like this:

$test = "foobar";
$test = sTr_RePlace("bar", "baz", $test);
$x = sPrinTf("%s is strange.", $test);

pRint $x . "\n";
eCho "foo";
What frightens me is that this is actually used, e.g. by this code snippet, which exists (in a similar form) in an unnamed PHP project:

echo sPrintF(_("Bla bla bla: %s"), $bla);

And yes, they do use echo to output the result of sprintf.

Update: So I got this great comment. The commenter wants to point out that the seemingly senseless use of "echo sprintf" is because of gettext. He says "That's simply the way you use gettext." But this is simply not true. The difference between printf and sprintf is that the first one outputs the string, while the second one returns it. That means that in the above example printf could be used (instead of sprintf) without a useless echo call in front of it. The reason for using sprintf (and probably the reason why you find it in a lot of applications that use gettext) is that you can use it to fill a variable with the translated string or to use the string in place. A common use case for this is handing a translated string to a template engine, for example.

14 October 2009

Patrick Schoenfeld: PHP and the great "===" operator

Let's suppose you've got an array with numeric indices:
[0] => 'bla'
[1] => 'blub'
Now you want to do something if the element 'bla' is found in that array.
Well, you know that PHP has a function array_search, which returns the key of the found value. Let's say you write something like this:
if (array_search('bla', $array)) do_something();

Would you expect that do_something would actually do something?
If yes: you are wrong.
If no: great, you've understood some part of PHP's insanity... er, goodness.

Actually, if 'bla' didn't have the index 0, it would work, because 1, 2, 3, 4, etc. evaluate to TRUE. But unfortunately PHP has some sort of implicit casting which makes 0 behave like FALSE, depending on the context. So the if() works for all elements except the one at index 0.

You might be tempted to write
if(array_search... != FALSE)
But this wouldn't help you, because 0 would still evaluate to FALSE, leading to if(FALSE != FALSE) which is (hopefully obviously) never true.

A PHP beginner (or even an intermediate, if he never stumbled across this case) might ask:
What's the solution for this dilemma?

Luckily the PHP documentation is great. It tells you about this. And additionally PHP has got this great operator (===), which causes people to ask "WTF?" when they hear about it for the first time. In addition to comparing the values, it also checks the type of the operands. This leads to the wanted result, because 0 is an integer while FALSE is a boolean. So the solution for our problem looks like this:
if (array_search('bla', $array) !== FALSE)
Isn't this great?

7 October 2009

Patrick Schoenfeld: Living in a PC

I had a time in my life when I thought that computer modding was cool. After that I came to the conclusion that it is just a waste of time. But this one is awesome. Really.

6 October 2009

Patrick Schoenfeld: To the rescue, git is here!

Consider the following scenario:

You work on a project for a customer that is managed in a Subversion repository. The work you get comes in the form of tickets. Tickets may affect different parts of the project, but they can also affect the same parts. Testing of your work is done by a different person. For some reasons it is required that the fix for each ticket is committed as a single commit to the Subversion repository, and only after it has been tested. Commits for two tickets changing the same files should not be mixed.

Now: How do you avoid a mess when working with several patches that possibly affect the same files?

The answer is: git with git-svn. Currently my workflow looks like this (a command-level sketch follows the list):

  1. Create a branch for each ticket
  2. Make my changes for the ticket in this branch
  3. Create a patch from the changes in this branch and supply the tester with it
  4. Wait for feedback and, if necessary, repeat steps 2-4
  5. When testing is finished, run git rebase -i master in the branch, squash the commits into one, and build a proper commit message from the template git provides.
  6. Switch to master branch and merge the changes from the branch.
  7. Rebase master against the latest svn (git svn rebase) and dcommit
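
In command form, that workflow looks roughly like this (branch and ticket names are made up, and details such as how the patch for the tester is produced may differ):

# 1.-2. create a branch for the ticket and do micro-commits there
git checkout -b ticket-1234 master
# ... edit, git add, git commit as often as needed ...
# 3. hand the combined change to the tester as a patch
git diff master ticket-1234 > ticket-1234.patch
# 5. after successful testing, squash the branch into a single commit
git rebase -i master
# 6.-7. merge into master, sync with SVN and commit there
git checkout master
git merge ticket-1234
git svn rebase
git svn dcommit
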
That workflow works well for me. It lets me do as many micro-commits as I want and still put only a single well-defined, well-tested commit into the project's SVN.
There are just some minor drawbacks I haven't solved yet (you know, git /can/ do everything, but that is actually a problem in itself):
For now I cannot tell how well merging works once conflicts arise. But I guess git does well, although there is the possibility that it could get complex again.

Nevertheless, the credit probably goes not to git specifically but to DVCSes in general. Anyway, I like it.

31 August 2009

Patrick Schoenfeld: How to not make manpages

% lintian $unnamed.changes | grep manpage | wc -l
1312

Congratulations to the unnamed upstream.

31 March 2009

Sven Mueller: Link collection 2009/03

Well, I normally despise things like this link collection, but I thought I might add it anyway, since these are useful links for me and if I don't post them here, I'm likely to forget where to find them in the near future:
  1. Sean Finney has a nice post about storing the list of parameters a (shell) script got in a way that it can be restored later. Quite handy if your script walks through the arguments parsing them (and consuming them while doing so) but you want to be able to display them in a helpful way if the parsing fails at some point.
  2. A while ago, Ingo Jürgensmann had a post that helps retrieving files from lost+found after a filesystem check, provided that you run his helper script on a regular basis. The same approach can also be used if you have a backup of all files but lost the sorting work you did after the backup was done. This is possible because running the script can be done more often than you would normally do backups.
  3. He also has a small post about mtr oddities when IPv6 comes into play.
  4. Adeodato Simó wrote about databases and when timestamps that store timezone information really are more useful than timestamps that don't.
  5. Adeodato also has a short post on using ssh as a socks proxy, which can be quite handy if you are behind a firewall.
Update: Fixed link to Ingo's file retrieval from lost+found article. Thanks to Patrick Schoenfeld, who pointed out the wrong link.
Also thanks to the anonymous poster who found an alternative way to store and (in a way) restore command-line parameters. The solution doesn't work in as general a way as the one by Sean Finney et al., but it is much shorter and therefore interesting where it can be used (when you control how command-line parameters are processed). See comments on this post for details.

28 March 2009

Patrick Schoenfeld: Arcor EasyBox A801

Recently, my girlfriend and I decided to get Arcor DigitalTV. So we got some hardware from Arcor (and unfortunately we needed to "upgrade" our real ISDN connection to a SIP-based one):

That second piece of hardware is kind of interesting, because it is an ADSL2+ modem, (WLAN) router, SIP gateway, file/print server (it has one USB port which can be used to connect a USB hub and up to 4 disks or 1 printer) and 4-port switch in one (I think it doesn't include what's called a 'splitter', because with this kind of connection there is no ISDN signal to be split from the internet signal).

Unfortunately not all is well about this (or about my provider). This piece of hardware can be configured with a modem installation code, which is good for an average user, because he does not need to care about anything. With this process the hardware configures itself by receiving configuration data from a configuration server of the provider. Regardless of other implications this might have, what disturbed me is that I lose control of my router if I use it. When configured this way, some pages in the router interface are grayed out; if you click them you get a message that this setting is controlled by your provider. Uargh. Well, unfortunately IPTV from Arcor does not simply use the same internet connection as I usually do (more about this later), so I was suffering from a lack of information about how to configure the router manually.
Arcor wasn't particularly helpful here, because they told us "Der Router kann nicht manuell konfiguriert werden." (which means in English: "The router cannot be configured manually."). Although this made me laugh, I was also disappointed and worried. So I decided to find out myself. I won't go into everything I tried, but in the end I found out what the modem installation code setup actually configures:

With this knowledge (and some good guessing, e.g. that both PPPoE links probably use the same user data) I was able to configure the router fully myself, which also allows me to set the QoS settings the way I want them (which is why I went through the whole torture in the first place, because SSH was extremely laggy while watching TV). One thing is notable, however: how I got the information.

The manufacturer of the router seems to do some things to protect the settings from the user. For example: the page for configuring the WAN is called wan_main.stm; it can be called without trouble if you are configuring the router manually, but the system blocks access to it when you use the modem installation code. BUT (and that's how I got the info that the third link is a MAC encapsulation link): the status page (where it shows that you are connected etc.) includes JavaScript vars for nearly everything you ever wanted to know about your router configuration. Not obfuscated at all. You just have to look at the source of the status frame. That's real professional, Arcor.

11 January 2009

Patrick Schoenfeld: Syncing mails

Okay, so I had a simple job to be done. I have two mail accounts: a private mail account and a company mail account, which has two folders containing private mails. I want to synchronize these folders to my private account. Which is the right tool to choose? I thought that one way would be to fire up a graphical mail user agent, like, let's say, Thunderbird, set up the accounts and simply move the mails between them. But this has some implications, like:

So I decided that I needed some small tool of the kind UN*X admins are used to having. After a quick apt-cache search, I found two tools: imapcopy and imapsync.
I installed both and looked at them. For the impatient, a spoiler: I decided to use imapsync.
I had a quick look at imapcopy and it did not have a proper manpage. Instead it refers to the built-in help (imapcopy -h), which is not useful either, and to examples in /usr/share/doc.
After that I had a look at imapsync. It comes with a pretty good manpage and pretty good built-in usage information. Apparently I rate it very important that either the manpage or the built-in help is good enough to get started with a tool. Certainly tools exist where a manpage is simply not enough, but I guess a tool to sync IMAP folders is not one of them.

After studying the manpage for about 2 minutes I was ready to construct a command line and give it a --dry try. This parameter lets me see what the tool would do if I omitted it. That looked good, so I gave it a real shot, and it started to work. It has two flaws, though (an example invocation is sketched at the end of this post):
  1. Unfortunately it does not indicate its progress, and the normal messages are not much help either, because they contain numbers that do not actually refer to mails in either of the mailboxes (they soon get literally higher than the number of mails in both mailboxes together) and I do not understand what they refer to.
  2. It sometimes crashed at random points with random messages. I didn't look deeper into it, because restarting the script helped, and therefore I cannot speak of an easily reproducible problem. In 3000 mails it happened about 1-2 times, so not a big deal but still annoying.
Anyway, it did the job, which took some time because of my bandwidth.
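
For reference, the kind of invocation described above looked roughly like the following; host names, users, passwords and folder names are placeholders, and the exact option names should be checked against the imapsync manpage of your version:

# dry run first: show what would be transferred without changing anything
imapsync --dry \
  --host1 mail.company.example --user1 patrick.schoenfeld --password1 secret1 \
  --host2 mail.private.example --user2 patrick --password2 secret2 \
  --folder INBOX.Private --folder INBOX.Private.Archive
# if the output looks right, run the same command again without --dry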

29 December 2008

Patrick Schoenfeld: New year resolutions

This post reminded me of new year resolutions. Generally, my opinion of such resolutions is not that good. They are something quite useless, because people often make resolutions with the intention of breaking them. Anyways...

I set up the following resolutions for myself:
Actually the last bullet is a joke, because I haven't smoked (regularly) for two years. So let's rephrase it:

23 December 2008

Patrick Schoenfeld: How to NOT improve relationships to your peer projects

I don't know if Aaron is an Ubuntu developer. I hope not. And I hope that none of the commenters is one. But this post, and especially the comments on it (because the article itself may have some justification), is certainly not a way to improve the relationship between projects like Ubuntu and the projects it is based on.

Yes, it is true that we have a rough tone on our mailing lists, and yes, it is sad that every now and then topics pop up that get discussed until somebody is hurt and leaves (the project). I certainly hope that this can improve over time, but most likely this is not something that breaks the project. Who knows what it's worth. We have a large community, many different opinions and therefore conflicts. Possibly it enhances the quality of our distribution, because things that would certainly make our product worse (e.g. lowering quality standards) won't get done, because they'll meet strong opposition.

But what's the point about innovation, Aaron? Is it the job of a distribution to deliver innovations? A distribution that serves as a base for more than 10 other distributions? A distribution that is known to care a lot about freedom and is known (and appreciated) for a high quality standard? I don't think we need to deliver innovations. We delivered innovations back in a time when it was needed, when things were missing: good package management, conf.d directories. Apart from that, I think there were enough innovations in the past years. Possibly not as user-visible and trendy as a graphical virtualization frontend. What about xen-tools, for example?
Now it's all about reinventing the wheel or developing frontends. Launchpad, for example, is just reinventing the wheel. Savannah exists, GForge exists. Bazaar is just reinventing the wheel. Git exists. So does Mercurial.

But what's most sad about your post and the comments is that you (and your commenters) seem to share the opinion that all Debian does is package software and nothing more. According to you, Debian developers do not fix bugs, forward patches, file (important) wishlist bugs, encourage upstreams to remove software with horribly broken licenses, improve their own software and tools (dpkg, aptitude, apt-get, the devscripts, ...), do integration work (alternatives, $EDITOR usage, ...) or documentation work (writing manpages). The reality is that there are many small and greater things driven by Debian Developers that serve the whole community, in addition to the groundwork that your favorite operating system is based on and without which it could not exist.
It's just not being hyped the way it is done by companies with their marketing departments, companies that actually earn money with what they do. Take the projects you named: they all have one. They all have paid developers.

In any case, such a rant (with no real constructive content) will not help the relationship between other projects and Debian.

16 December 2008

Patrick Schoenfeld: Re: How many bugs have you fixed today?

Bastian Venthur notices how the "How many bugs have you fixed today?" question is used as a killer argument against criticism of the release process.

I must confess that I also don't like this habit of using such arguments as a way to silence others. It is something that only people who enjoyed a bad upbringing would do.
But OTOH it is really sad to see that it is always the same people fixing RC bugs. I can only guess, but it seems like out of over 1000 developers perhaps 100 are actively involved in squashing release-critical bugs. That is a shame. And people who only blame the release team for the consequences of this aren't really helpful either. It's not like having a release team is meant to be a "5 people to rule/rescue the world" approach.

Bastian, the problem with your attitude is that you invite such arguments with it. If it is not visible that you do anything more to get Lenny out, apart from pointing a finger at the release team, saying "Ha! Ha!", and speculating about how we will fail to release Lenny on time, then you shouldn't wonder. You should remember that you are part of the "we" that is failing when we as a project miss yet another release date.

And yes, I do agree that our release process is not really ideal, and I think we should push further changes to optimize it (which has already happened, given that the unblock policy was way less strict than in the years before), but where is your constructive contribution to that?

14 October 2008

Patrick Schoenfeld: Extending Debian with customizable packages

In my previous post I drafted a rough idea of how to add a feature to Debian that would bridge the gap between Debian as a binary distribution and any source distribution. The feature in question gives users the chance to build customized Debian packages for specific feature sets. The possible use cases are simple:

Let's say Anthony User needs super-duper-tool with postgresql support. However, this is a not-so-common use case, so the package in the archive comes without postgresql support. If he now wants to build a custom package, the process is "as simple" as:

apt-get source super-duper-tool
cd super-duper-tool-*
sudo apt-get build-dep super-duper-tool   # or: sudo mk-build-deps -r -i
# (possibly try out which ./configure options are supported by super-duper-tool)
vi debian/rules                           # edit the configure options for the build
dpkg-buildpackage -us -uc
sudo debi

That is, to be honest, a lot of work. And the biggest problem about it is that you need to know all this, which is not really user-level knowledge.

The basic idea is that Anthony can do something like
export DEB_BUILD_OPTIONS=nomysql,postgresql
deb-build-tool build super-duper-tool

and is done.
So the question is how could this be achieved?

  1. Define a set of common options (options should be consistent across all source packages, otherwise it does not make that much sense) that every package should support if the feature in question is supported by the package. This includes flags like mysql, postgresql, ldap and their no-equivalents (nomysql, nopostgresql) to negate them.
  2. Enable the use of these common options in the package build process, and that is really the hardest part. Defining a lot of ifneq..findstring constructs for 2-10 options per source package in every available source package is.... not.... a good approach. Luckily a lot of packages use autoconf, where enabling and disabling features is as easy as adding a --with-foo or --enable-foo option to the ./configure parameters. So we could write a wrapper that handles these options (a sketch of such a wrapper follows this list). The debhelper scripts already have dh_auto_configure; maybe this could be enhanced.
  3. If it does not exist: write a tool to auto-build packages with DEB_BUILD_OPTIONS set by the user. It should automatically get the source, (optionally set up/update a pbuilder environment,) build it, and optionally install/upload it (to a user repository, for example).
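
A minimal sketch of what such a wrapper could look like, written as a small shell helper that a debian/rules file could call. The flag names follow the examples above (they are a proposal, not an existing standard), and the ./configure switch names are placeholders that depend on the actual package:

#!/bin/sh
# Hypothetical helper: translate proposed feature flags from DEB_BUILD_OPTIONS
# into ./configure switches. Flag and switch names are illustrative only.
CONFIGURE_FLAGS=""
# accept both space- and comma-separated option lists
for opt in $(echo "$DEB_BUILD_OPTIONS" | tr ',' ' '); do
    case "$opt" in
        postgresql)   CONFIGURE_FLAGS="$CONFIGURE_FLAGS --with-postgresql" ;;
        nopostgresql) CONFIGURE_FLAGS="$CONFIGURE_FLAGS --without-postgresql" ;;
        mysql)        CONFIGURE_FLAGS="$CONFIGURE_FLAGS --with-mysql" ;;
        nomysql)      CONFIGURE_FLAGS="$CONFIGURE_FLAGS --without-mysql" ;;
    esac
done
exec ./configure $CONFIGURE_FLAGS "$@"
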
Still, the idea is only a rough draft. There are still a lot of open questions/issues, for example:
Is anybody interested in pushing this idea forward? I'm certainly interested in it, and I know people who like the idea, too. But I can't push this forward all on my own. Certainly the work won't affect Lenny anymore, but considering the time frame that is planned for Lenny+1 by the release team, this could be a feature for Lenny+1.

13 October 2008

Patrick Schoenfeld: Re: Gentoo destroying earth?

I fully agree that working with Gentoo is no fun at all, as Julian points out in his post.
My impression when I tried Gentoo for the first (and last) time was the same. That's not because it is made badly. In fact, things in Gentoo aren't made that badly: there is good documentation, portage is quite nice (with its USE flags and the like), but in the end I'm not really satisfied.
After all, Gentoo uses a copied concept, which in itself is good. The concept is derived from the BSD world and is simply that of source-based systems. This concept has its advantages over binary distributions, because it allows a flexibility that is not really possible with a binary distribution. That really is the only appreciable advantage of these systems. Users of these systems (including FreeBSD and the like) tend to give other arguments as well: newest software, highest performance, and even security is a point they mention.

Usually the "newest software" argument is brought up with a rant against Debian. I keep hearing statements like "Debian Etch is totally outdated and Testing is broken". That statement in itself is just wrong, but the important thing is: What do you need such new software for? On a server its preferrable not to upgrade to each and every major release, for certain reasons. Its also a bad thing to build on production systems, so you need to take further measures to administer systems like that. On a desktop I can understand the logic, but then again: No. Why would I want a system I have to compile from scratch and on each upgrade (wasting a lot of time, power etc.) which changes often?

The performance argument is the dumbest of all. Binary distributions usually build binaries for various platforms. And while they can't be optimized for a very specific processor or a very specific feature set (which would reduce binary size), they usually perform well enough that a difference between the self-compiled Gentoo system and a foreign-compiled Debian is not noticed by the user and is sometimes not even measurable. So the time you save during the lifetime of your builds (which is not very long on a desktop, is it?) because of the enhanced performance is used up a hundred times over by the time you waste compiling all the software just for yourself, even if some of the software performs noticeably better (e.g. video processing is said to benefit a lot from an optimized platform).

What about the security argument? It has been said that you get your updates earlier than users of binary distributions. That is partly true, because in a binary distribution the update needs to be built first before it gets to the end user (everything before that point is the same for a binary or a source distribution). But do you really follow security issues so closely that you benefit a lot from this? If you do, you indeed save some hours, if (and only if) both distributions have a solution for the security issue at the same time.

Anyway, the flexibility argument can't be discussed away. It's the argument that makes ports in BSD systems attractive. You can have exactly the features you want, with the build-time options you want, in a comfortable manner. That is in contrast to binary distributions, where we package developers need to guess which features might be needed (or wanted) by the users, with mixed results. Good if they can decide for themselves. Still, I don't see the reason for compiling the whole system if I need one or two customized components. So I go with Debian and customize packages if I really need or want to. The only difference is that it takes a lot more effort.

I would love to see something done in Debian to reach a compromise: making the rebuilding of packages with different options very comfortable. We have DEB_BUILD_OPTIONS; maybe we should enhance it to support a lot more options than noopt, nostrip or nodoc. Possibly it would be a good thing to standardize on some use flags (e.g. [no]ssl, [no]ldap, etc.) and support them in the debian/rules file. This way building customized packages would be as easy as setting a sensible DEB_BUILD_OPTIONS and running dpkg-buildpackage on the source. This could be eased further by providing a tool to download the source and build a given package (IIRC such a tool already exists). How does that sound?

7 October 2008

Neil Williams: I didn't know about . . .

It's really quite strange to come across a tool that is incredibly useful, does exactly what I need but of which I was completely unaware. This time it is manpages.debian.net - how long has this been around and why hasn't it been announced somewhere?
The best part is that if a manpage uses a See also section, the references to other manpages get turned into HTML links to the relevant manpage on the site.
Getting a manpage is now as simple as:
http://manpages.debian.net/cgi-bin/man.cgi?query=%s

which makes it trivial to put this onto the epiphany smart bookmark toolbar.
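
The same URL pattern works outside the browser as well; for example (assuming curl is installed, and substituting the page you want for %s):

curl -s 'http://manpages.debian.net/cgi-bin/man.cgi?query=lintian' > lintian.html
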
The site supports:

Thanks to Patrick Schoenfeld for inadvertently leading me to the site - I couldn't find who created the site so step up and be known.
Update
To those making comments on this entry, please note:
I have nothing directly to do with the site, I just found it and blogged about it so don't use my blog as a bug list. Ta. Such comments have been deleted and further comments are now disabled for this entry.

6 October 2008

Patrick Schoenfeld: Where to find help for commands?

Today a discussion arose in #debian-devel on the OFTC IRC network. It started because someone noted bug report #501318, which is, to summarize it, just a user mistake: someone obviously read the man page (time(1)) for the /usr/bin/time command, while time is also a shell builtin (which does not accept the same arguments as the time command). Certainly this bug report appears funny at first sight, but on the other hand there is a suboptimal situation that leads to this.

  1. There are some builtin commands that also have binary equivalents (like time, printf, echo). It's quite easy to tell which of the two is used, for example with the shell's type builtin (see the example after this list), but that's not really a realistic workflow, so it's better to know this. The difference between these commands often causes problems which we have to cope with. For example, bash's builtin echo behaves differently from /bin/echo, and people who use bash as their default shell tend to use what bash provides, which in turn causes problems if other people who use a different default shell try to work with those scripts. But this is another problem, because...
  2. ... every program, utility and function in Debian has to provide a manpage, as stated by our policy: "Each program, utility, and function should have an associated manual page included in the same package." I guess the rationale behind that is that 'man' is a very common tool in the Un*x world, which is widespread and whose use is much recommended for finding out how specific tools behave. It is always referred to in documentation, whether it is Debian-specific or not, e.g. in books.

    The time package, which is of priority Standard and which includes /usr/bin/time, conforms to that policy by providing a manpage for the time command. The various shells don't do that, because they usually don't have a man page for each and every builtin (usually they have a more generic manpage which covers the builtins, or in some rare cases a special manpage for such and similar things, as with zsh, which has zshmisc(1)), and because it's not that easy: the man command cannot (AFAIK) distinguish between the user joey_average calling 'man time' in bash, the user schoenfeld calling 'man time' in zsh, or joey_foobar calling 'man time' in a shell which does not have a builtin time command and uses /usr/bin/time instead.
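
To illustrate the first point: this is roughly how the two variants can be told apart in bash (zsh behaves similarly), assuming the time package is installed:

$ type -a time              # ask the shell what 'time' resolves to
time is a shell keyword
time is /usr/bin/time
$ time sleep 1              # runs the bash keyword
$ \time sleep 1             # quoting part of the word bypasses the keyword
$ /usr/bin/time sleep 1     # the explicit path that the time(1) manpage recommends
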
So what's wrong about this situation? Would you say that users who expect 'man time' (or similar examples) to do the right thing are making wrong assumptions? I disagree. It's what they've always been told to do; and if it does not show anything, to run info, or look at HTML documentation, or whatever else. But they haven't been prepared for the case where the manpage does show something, but the wrong thing. The good thing is that the manpage for time includes a sentence:

"Users of the bash shell need to use an explicit path in order to run the external time command and not the shell builtin variant."

The bad thing about it is that it sits about 70-80% of the way into the manpage.

Clint, the maintainer of zsh, mentioned run-help, which seems to be a part of zsh, but not of any other shell, and does more or less the right thing (at least for the builtins), but not for external commands and not even for itself (it opens the code of the function instead of something user-readable like a manpage). I guess "one tool for a specific need" is a good maxim, but is it really a good maxim for finding documentation?

But how could the situation be improved? I could imagine a wrapper for man, similar to run-help but as a more generic solution. Any ideas for it? Is it the right way at all to improve the user experience? Any other ideas? Other opinions?

17 September 2008

Patrick Schoenfeld: Call me Mr. Raider, call me Mr. Wrong, ...

I know, I know. The "Its name is..." bandwagon, initiated by madduck, is already over. But I've seen so many interesting hostnames that I'd like to contribute mine as well. For my personal systems I usually use names of characters from the Simpsons family: maggie, homer and moe is what I currently have. For systems in the company I work for, the naming scheme is usually more pragmatic: starting with a three-character string (the initials of the company I work for), followed by a dash and a suffix which more or less identifies the usage of the server or something that makes it special. An example of this scheme is imr-wsvn, which is a VM hosting a web-SVN tool. But there is also at least one exception to that scheme: I called my desktop at work teekanne (which is a German word meaning teapot) for no special reason.

Oh, and there is another exception: until yesterday I was running a two-week experiment with Fedora on my notebook in order to learn something about that system. I called this Fedora system fixit.

16 September 2008

Patrick Schoenfeld: Hi Planet!

Today I got a mail from weasel telling me that my Debian account has been created. So at the end of my NM process everything went pretty fast. Now that I have finally become a Debian Developer it feels like the end of a long journey, so I'd like to reflect on the process.

I started my "career" in Debian around 2006, when I became a co-maintainer of the smstools package. From then on I got more involved, adopting some more packages etc., until I finally applied to become a DD on 31st August 2007. Until then most of my uploads were sponsored by Daniel Baumann (panthera), who thereby helped me learn a lot, and so he became my advocate. From that point on, most of the time in my NM process was spent waiting: waiting for an AM, waiting for my AM (well, he also needed to wait for me, because we both had busy times during my active processing), waiting for front desk. In the end I was lucky, because the DAM problem was solved recently. I know that for part of the time I didn't feel like pushing my application forward fast, because I did not see where this would lead, except to a situation where I'd again be waiting for the DAM.

Now I'm quite happy. And this is a good moment to thank some people who helped me get to this point: panthera for being my sponsor and advocate for some time, Thijs for being a sponsor and quite helpful when it came to fixing security issues in mantis, naoliv for being a very reliable sponsor, pabs for being my AM, Myon for several actions on my NM application, and of course the people involved in account creation. Thanks.

12 April 2008

Philipp Kern: Wrapping up Sarge into a nice package

We escorted Sarge to its last home. 3.1r8 is done, thanks to all the people who made it possible. A big thanks goes to James Troup, our ftpmaster of the day, who did all the grunt work of getting a new point release out of the door. To bring in a more personal feeling of who makes all this possible, here is a list of people contributing uploads to 3.1r8 (mostly people from our fabulous Security Team): I would also like to thank dann frazier, Luk Claes, Martin Zobel-Helas and Neil McGovern for helping with the preparation of the point release.
